Technology Industry Divided over How to Govern AI Development
2023-12-08
Technology leaders have shown major support for laws to govern artificial intelligence use.
At the same time, they are seeking to guarantee that any future AI rules work in their favor.
The technology industry is increasingly divided about how to govern AI.
One side supports an "open science" method of AI development; the other supports a closed method.
Facebook parent Meta and IBM recently launched a new group called the AI Alliance.
The group supports the "open science" method of AI development.
On the other side are companies such as Google, Microsoft and ChatGPT-maker OpenAI.
Safety is at the heart of the debate.
But tech leaders are also arguing about who should profit from AI developments.
What is open-source AI?
The term "open-source" comes from a common method of building software in which the code is widely available at no cost. Anyone can examine and make changes to it.
Open-source AI involves more than just code. Computer scientists differ on how to define "open source."
They say the definition depends on which parts of the technology are publicly available and whether there are restrictions on use.
Some computer scientists use the term "open science" to describe the wider philosophy.
IBM and Meta lead the AI Alliance. Members include Dell, Sony, chipmakers AMD and Intel, and several universities and smaller AI companies.
The alliance is coming together to say "that the future of AI is going to be built ... on top of the open scientific exchange of ideas and on open innovation, including open source and open technologies," said Darío Gil of IBM.
Gil made the comment in a discussion with The Associated Press.
Concerns about open-source AI
Part of the confusion about open-source AI is that the company that built ChatGPT and the image-generator DALL-E is called OpenAI. But its AI systems are closed.
"There are near-term and commercial incentives against open source," said Ilya Sutskever, OpenAI's chief scientist and co-founder, in a video with Stanford University in April.
But there is also a longer-term worry about the open development method.
Sutskever noted one worry is that an AI system with powerful abilities could be too dangerous to make available to the public.
For example, he described a possible AI system that could learn how to start its own biological laboratory.
Even current AI models present risks.
They could create disinformation campaigns, for example, said David Evan Harris of the University of California, Berkeley.
Such campaigns could disrupt democratic elections, he said.
"Open source is really great in so many dimensions of technology," but AI is different, Harris said.
The Center for Humane Technology, a longtime critic of Meta's social media activities, is among the groups drawing attention to the risks of open-source or leaked AI models.
"As long as there are no guardrails in place right now, it's just completely irresponsible to be deploying these models to the public," said the group's Camille Carlton.
Benefits and dangers
An increasingly public debate has appeared over the good and bad of using an open-source method of AI development.
Meta's chief AI scientist, Yann LeCun, this fall criticized OpenAI, Google, and Anthropic on social media for what he described as "massive corporate lobbying."
LeCun argues that the companies are trying to write rules in a way that helps their high-performing AI models and could help them hold their power over the technology's development.
The three companies, along with OpenAI's key partner Microsoft, have formed their own industry group called the Frontier Model Forum.
LeCun said on X, formerly Twitter, "Openness is the only way to make AI platforms reflect the entirety of human knowledge and culture."
For IBM, the dispute feeds into a much longer competition that began before the AI boom.
IBM was an early supporter of the open-source Linux operating system in the 1990s.
Chris Padilla leads IBM's international government affairs team.
The companies are trying to raise fear about open-source innovation as they have in the past, he suggested.
He added, "I mean, this has been the Microsoft model for decades, right? They always opposed open-source programs that could compete with Windows or Office. They're taking a similar approach here."
I'm John Russell.
Matt O'Brien reported on this story for the Associated Press. John Russell adapted it for VOA Learning English.
__________________________________________________
Words in This Story
innovation - n. the act of introducing new ideas, devices, or methods
incentive - n. something that encourages a person to do something
disrupt - v. to interrupt the normal progress or activity of something
dimension - n. a part of something
guardrail - n. a protective device along the side of a road that prevents vehicles from driving off the road (can be used metaphorically)
lobby - v. to try to influence government officials to make decisions for or against something